Stopping Rules for Gradient Methods for Non-convex Problems with Additive Noise in Gradient

Authors

Abstract

We study the gradient method under the assumption that only an additively inexact gradient is available for, generally speaking, non-convex problems. The non-convexity of the objective function, as well as the use of an inexact gradient at the iterations, can lead to various difficulties. For example, the trajectory of the method may wander far enough away from the starting point. On the other hand, an unbounded removal of the trajectory from the starting point in the presence of noise can take the method away from the desired global solution. Results on the behavior of the trajectory of the gradient method are obtained under the assumption of gradient inexactness and the gradient dominance condition. It is well known that such a condition is valid for many important non-convex problems and, moreover, that it leads to good complexity guarantees for the gradient method. A rule for early stopping of the gradient method is proposed. Firstly, it guarantees an acceptable quality of the exit point of the method in terms of the objective function. Secondly, it ensures a fairly moderate distance of this point from the chosen initial position. In addition to the gradient method with a constant step, its variant with an adaptive step size is also investigated in detail, which makes it possible to apply the developed technique to the case of an unknown Lipschitz constant for the gradient. Some computational experiments have been carried out which demonstrate the effectiveness of the proposed stopping rules.
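The gradient dominance condition mentioned here is the Polyak-Lojasiewicz (PL) condition f(x) - f* <= ||grad f(x)||^2 / (2*mu), under which a small gradient norm certifies a small function-value gap even without convexity. The following Python sketch illustrates the two ingredients the abstract names: a constant-step method with an additively inexact gradient oracle, stopped early once the observed gradient norm is dominated by the noise level, and an adaptive-step variant for an unknown Lipschitz constant. The threshold 2*delta, the noise-relaxed descent test, and all names are illustrative assumptions of this sketch, not the paper's exact rule or constants.

    import numpy as np

    def noisy_gd_constant_step(f, grad, x0, L, delta, max_iter=10_000):
        # grad(x) is assumed to return the true gradient plus noise of
        # norm at most delta. Early stopping: once the observed gradient
        # norm is <= 2*delta, the true gradient norm is at most 3*delta,
        # so further steps cannot reliably decrease f and may only move
        # the trajectory farther from x0.
        x = x0.copy()
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) <= 2 * delta:
                break                        # early stopping rule
            x = x - g / L                    # constant step 1/L
        return x

    def noisy_gd_adaptive_step(f, grad, x0, delta, L0=1.0, max_iter=10_000):
        # Adaptive variant for an unknown Lipschitz constant: double the
        # local estimate L until a sufficient-decrease test (relaxed by
        # the noise level delta) holds, then halve it for the next step.
        x, L = x0.copy(), L0
        for _ in range(max_iter):
            g = grad(x)
            gnorm = np.linalg.norm(g)
            if gnorm <= 2 * delta:
                break
            while True:
                x_new = x - g / L
                # descent lemma with inexact gradient: the extra term
                # delta*gnorm/L accounts for noise of norm <= delta
                if f(x_new) <= f(x) - gnorm**2 / (2 * L) + delta * gnorm / L:
                    break
                L *= 2.0
            x, L = x_new, max(L / 2.0, L0)
        return x

Under the PL condition, the exit point x of either routine satisfies f(x) - f* = O(delta^2 / mu), which is the kind of acceptable quality of the exit point the abstract refers to.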


Similar articles

Universal gradient methods for convex optimization problems

In this paper, we present new methods for black-box convex minimization. They do not need to know in advance the actual level of smoothness of the objective function. Their only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible rate of convergence. We confirm our theoretical results...
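As a rough illustration of how such a method adapts without knowing the smoothness level, here is a sketch in the spirit of Nesterov's universal primal gradient method: a step is accepted as soon as an inexact descent test with slack eps/2 holds, and the target accuracy eps is the only input parameter. The names and the halving heuristic are assumptions of this sketch, not the paper's algorithm.

    import numpy as np

    def universal_gradient_step(f, grad, x, L, eps):
        # Double the curvature estimate L until the quadratic model with
        # slack eps/2 majorizes f at the trial point; the same test works
        # whether the gradient is Lipschitz or only Holder continuous.
        g = grad(x)
        while True:
            x_new = x - g / L
            model = f(x) - g @ g / (2 * L) + eps / 2
            if f(x_new) <= model:
                return x_new, L / 2.0   # accept; relax L for the next step
            L *= 2.0                    # reject; tighten the model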


Intermediate Gradient Methods for Smooth Convex Problems with Inexact Oracle

Between the robust but slow (primal or dual) gradient methods and the fast but error-sensitive fast gradient methods, our goal in this paper is to develop first-order methods for smooth convex problems with intermediate speed and intermediate sensitivity to errors. We develop a general family of first-order methods, the Intermediate Gradient Method (IGM), based on two sequences of coefficie...


Gradient-based stopping rules for maximum-likelihood quantum-state tomography

When performing maximum-likelihood quantum-state tomography, one must find the quantum state that maximizes the likelihood of the state given observed measurements on identically prepared systems. The optimization is usually performed with iterative algorithms. This paper provides a gradient-based upper bound on the ratio of the true maximum likelihood and the likelihood of the state of the cur...
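The underlying idea, generic to convex problems over a compact set, is that the gradient at the current iterate bounds the remaining optimality gap. The paper bounds the likelihood ratio via an eigenvalue of the gradient operator on density matrices; the sketch below shows only the classical-probability analogue, minimizing a convex negative log-likelihood over the probability simplex, as a stand-in.

    import numpy as np

    def simplex_gap_bound(g, x):
        # Convexity gives f(y) >= f(x) + <g, y - x> for all y, so
        # minimizing the right-hand side over the simplex yields the
        # computable bound f(x) - f_min <= <g, x> - min_i g_i.
        return float(g @ x - g.min())

A gradient-based stopping rule then reads: terminate the iterative algorithm once simplex_gap_bound(grad(x), x) <= tol, which certifies that the log-likelihood gap is below tol.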


An Efficient Conjugate Gradient Algorithm for Unconstrained Optimization Problems

In this paper, an efficient conjugate gradient method for unconstrained optimization is introduced. Parameters of the method are obtained by solving an optimization problem, and using a variant of the modified secant condition. The new conjugate gradient parameter benefits from function information as well as gradient information in each iteration. The proposed method has global convergence und...
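Since this excerpt elides the specific parameter, the sketch below uses the familiar Polak-Ribiere-plus formula only as a stand-in to show the update structure d = -g + beta * d with a backtracking line search; the paper's own beta is derived from a modified secant condition and is not reproduced here.

    import numpy as np

    def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=1000):
        x = x0.copy()
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) <= tol:
                break
            if g @ d >= 0:
                d = -g               # safeguard: restart on a non-descent direction
            t = 1.0                  # backtracking Armijo line search along d
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ stand-in
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x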




Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2023

ISSN: 0022-3239 (print), 1573-2878 (electronic)

DOI: https://doi.org/10.1007/s10957-023-02245-w